A diffusion auction is a market that sells commodities over a social network, where the challenge is to incentivize existing buyers to invite their neighbors in the network to join the market. Existing mechanisms have been designed to solve this challenge in various settings, aiming at desirable properties such as non-deficiency, incentive compatibility, and social welfare maximization. Since the mechanisms are employed in dynamic networks with ever-changing structures, buyers can easily generate fake nodes in the network to manipulate the mechanisms for their own benefit, a manipulation commonly known as a Sybil attack. We observe that strategic agents may gain an unfair advantage in existing mechanisms through such attacks. To resist this potential attack, we propose two diffusion auction mechanisms, the Sybil tax mechanism (STM) and the Sybil cluster mechanism (SCM), to achieve both Sybil-proofness and incentive compatibility in the single-item setting. Our proposal provides the first mechanisms to protect the interests of buyers against Sybil attacks with a mild sacrifice of social welfare and revenue.
Throttling is one of the most popular budget-control methods in today's online advertising markets. When a budget-constrained advertiser employs throttling, she can choose whether to participate in an auction after the advertising platform recommends a bid. This paper focuses on the dynamic budget throttling process in repeated second-price auctions from a theoretical viewpoint. An essential feature of the underlying problem is that the advertiser does not know the highest competing bid upon entering the market. To model the difficulty of eliminating this uncertainty, we consider two different information structures. With full-information feedback, the advertiser obtains the highest competing bid in every round; with partial-information feedback, she observes the highest competing bid only in the auctions she participates in. We propose the OGD-CB algorithm, which performs simultaneous distribution learning and revenue optimization in the face of online ad queries. In both settings, we show that the algorithm guarantees $O(\sqrt{T \log T})$ regret with probability $1 - O(1/T)$ relative to the fluid adaptive throttling benchmark. By proving an $\Omega(\sqrt{T})$ lower bound on the minimax regret, which holds even for the best possible choice of algorithm, we establish the near-optimality of our algorithm. Finally, we compare the fluid optimum of throttling with that of pacing, another widely adopted budget-control method. The numerical relationship between these benchmarks provides further insight into the comparison of different online algorithms for budget management.
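A toy sketch of the throttling idea described above (this is an illustrative stand-in, not the paper's OGD-CB algorithm; the function name, the probabilistic participation rule, and the gradient-style update are all assumptions): the advertiser participates with probability theta and nudges theta so that realized average spend tracks the fluid per-round budget target.

```python
import random

def throttled_second_price(value, budget, rounds, rival_bids, lr=0.05):
    """Toy throttling loop for repeated second-price auctions:
    participate with probability theta, then adjust theta so that the
    average realized spend tracks the fluid per-round budget target.
    Hypothetical sketch only, not OGD-CB."""
    theta = 1.0
    target = budget / rounds          # fluid per-round spend target
    spend, wins = 0.0, 0
    for t in range(rounds):
        if spend >= budget:           # hard budget stop
            break
        if random.random() < theta and value > rival_bids[t]:
            spend += rival_bids[t]    # second-price payment
            wins += 1
        avg_spend = spend / (t + 1)
        # Raise theta when underspending, lower it when overspending.
        theta = min(1.0, max(0.0, theta - lr * (avg_spend - target)))
    return spend, wins, theta
```

With a constant highest competing bid, theta settles near the participation rate that spends the budget evenly across rounds.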
To take advantage of strategic commitment, a useful tactic for playing games, a leader must learn sufficient information about the follower's payoff function. This, however, gives the follower a chance to provide fake information and influence the final game outcome. Through a carefully contrived payoff function misreported to the learning leader, the follower may induce an outcome that benefits him more than the one arising when he behaves truthfully. We study the follower's optimal manipulation via such strategic behavior in extensive-form games. Different attitudes of the follower are taken into account: an optimistic follower maximizes his true utility among all game outcomes that can be induced by some misreported payoff function, while a pessimistic follower considers only misreported payoff functions that induce a unique game outcome. For all the settings considered in this paper, we characterize all the game outcomes that can be successfully induced. We show that the follower can find the optimal way of misreporting his private payoff information in polynomial time. Our work completely resolves the follower's optimal manipulation problem on extensive-form game trees.
In this work, a convergence lemma is proved for functions $f$ that are finite compositions of analytic maps and the maximum operator. The lemma shows that the set of $\delta$-stationary points near an isolated local minimum point $x^*$ shrinks to $x^*$ as $\delta \to 0$. It is a natural extension of the version for strongly convex $C^1$ functions. The correctness of the lemma is subtle, however: analyticity of the maps is necessary, since replacing it with differentiability or even $C^\infty$ smoothness renders the lemma false. The proof is based on Łojasiewicz's stratification theorem for semianalytic sets. An extension of the proof yields a geometric characterization of the set of stationary points of $f$. Finally, a notion of stability over stationary points, termed convergence stability, is proposed. It asks whether, under small numerical perturbations, a reasonable convergent optimization method started near a stationary point should eventually converge to that same stationary point. The notion of convergence stability becomes qualitatively nontrivial only when the objective function is both nonsmooth and nonconvex. Via the convergence lemma, an intuitive equivalent condition for the convergence stability of $f$ is proved. Together, these results provide a new geometric perspective for studying the question of "where to converge" in nonsmooth nonconvex optimization.
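The shrinking property above can be written compactly. The following display is a paraphrase under assumed notation, not a quotation of the paper's statement:

```latex
% Paraphrased restatement. B(x^*, r) is a small ball on which x^* is
% the unique local minimizer of f, and a point x is \delta-stationary
% when its (Clarke) subdifferential contains an element of norm at
% most \delta.
\lim_{\delta \to 0^+} \; \sup \Bigl\{ \, \lVert x - x^* \rVert \;:\;
    x \in B(x^*, r), \ \operatorname{dist}\bigl(0, \partial f(x)\bigr) \le \delta \, \Bigr\} \;=\; 0 .
```

For strongly convex $C^1$ functions this follows directly from the gradient growth bound; the lemma's content is that it survives for nonsmooth compositions of analytic maps with $\max$.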
One of the central problems in auction design is to develop an incentive-compatible mechanism that maximizes the auctioneer's expected revenue. While theoretical approaches have encountered bottlenecks in multi-item auctions, there has recently been much progress on finding optimal mechanisms through deep learning. However, these works either focus on a fixed set of bidders and items, or restrict the auction to be symmetric. In this work, we overcome such limitations by factoring contextual information about bidders and items into the auction learning framework. We propose $\mathtt{CITransNet}$, a context-integrated transformer-based neural network for optimal auction design, which maintains permutation-equivariance over bids and contexts while being able to find asymmetric solutions. We show by extensive experiments that $\mathtt{CITransNet}$ can recover the known optimal solutions in single-item settings, outperform strong baselines in multi-item auctions, and generalize well to cases beyond those seen in training.
In the area of auctions, understanding the convergence properties of learning dynamics in repeated auctions is a timely and important question, with many applications in, e.g., online advertising markets. This work focuses on repeated first-price auctions where bidders with fixed values for the item learn to bid using mean-based algorithms, a large class of online learning algorithms that includes popular no-regret algorithms such as Multiplicative Weights Update and Follow the Perturbed Leader. We completely characterize the learning dynamics of mean-based algorithms, in terms of convergence to a Nash equilibrium of the auction, in two senses: (1) time-average: the fraction of rounds in which bidders play a Nash equilibrium approaches 1 in the limit; (2) last-iterate: the mixed strategy profile of the bidders approaches a Nash equilibrium in the limit. Specifically, the results depend on the number of bidders holding the highest value: if that number is at least three, the bidding dynamics almost surely converge to a Nash equilibrium of the auction, both in time-average and in last-iterate; if the number is two, the bidding dynamics almost surely converge to a Nash equilibrium in time-average, but not necessarily in last-iterate; if the number is one, the bidding dynamics may fail to converge to a Nash equilibrium in either time-average or last-iterate. Our findings open up new possibilities in the study of the convergence dynamics of learning algorithms.
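A minimal sketch of one mean-based learner from the class above, here Multiplicative Weights Update over a discrete bid grid with full-information feedback (the bid grid, the constant rival sequence, and the learning rate are illustrative assumptions, not the paper's experimental setup):

```python
def mean_based_first_price(value, bid_grid, rival_bids, eta=0.5):
    """Multiplicative Weights Update for one bidder in a repeated
    first-price auction. `rival_bids` is an assumed sequence of the
    highest competing bid each round; with full information, every
    grid bid is updated with its counterfactual utility."""
    weights = [1.0] * len(bid_grid)
    for rival in rival_bids:
        for i, b in enumerate(bid_grid):
            # First-price utility: win and pay your own bid when
            # outbidding the rival, receive nothing otherwise.
            utility = (value - b) if b > rival else 0.0
            weights[i] *= (1.0 + eta) ** utility
    total = sum(weights)
    return [w / total for w in weights]  # final mixed strategy
```

Against a fixed rival bid, the mixed strategy concentrates on the cheapest grid bid that still wins, illustrating the mean-based tendency toward best responses that drives the convergence results.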
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on training-efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing model generalization ability. Concretely, DM can train ViT in half the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves linear-probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
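One simple reading of the disjoint regulation above can be sketched as follows (an illustrative interpretation, not DMJD's actual sampler: the function name and the constraint that the masked sets themselves are pairwise disjoint, which requires the number of views times the mask ratio to be at most 1, are assumptions):

```python
import random

def disjoint_masked_views(num_tokens, mask_ratio, num_views):
    """Illustrative disjoint-masking step: sample several masked views
    of one image whose masked token sets are pairwise disjoint, so more
    tokens receive a reconstruction target per image while every view
    keeps the same masking rate. Hypothetical sketch only."""
    num_masked = int(num_tokens * mask_ratio)
    assert num_masked * num_views <= num_tokens, "views must fit disjointly"
    order = list(range(num_tokens))
    random.shuffle(order)
    # Slice one shuffled token order into pairwise-disjoint masked sets.
    return [set(order[v * num_masked:(v + 1) * num_masked])
            for v in range(num_views)]
```

For a 14x14 token grid (196 tokens), four views at a 25% mask ratio jointly cover every token exactly once, which is the token-usage gain the abstract describes.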
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning. However, existing methods usually incur a large computational cost, and activation functions cause some intermediate-layer features to be lost. It is therefore a challenge to make the model lightweight while reducing the impact of intermediate feature loss on reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate this problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effect of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fuse features of different granularities through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in CNN models, we introduce the Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN achieves a good balance between performance and efficiency, and is well suited to downstream tasks that must solve problems in low-resolution scenarios.
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
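A standard distribution-free construction in the spirit of the abstract above, bounding a loss quantile by an order statistic (a sketch under assumptions, not necessarily the paper's exact method; the function names are illustrative):

```python
from math import comb

def binom_cdf(k, n, p):
    """P[Binomial(n, p) <= k], computed exactly with integer binomials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def quantile_upper_bound(losses, q, delta):
    """Distribution-free upper confidence bound on the q-quantile of the
    loss distribution from n i.i.d. observed losses. The number of
    samples falling below the true q-quantile is (stochastically) at
    most Binomial(n, q), so the k-th order statistic upper-bounds the
    q-quantile with probability >= 1 - delta once
    BinomCDF(k - 1; n, q) >= 1 - delta. Returns the smallest such
    order statistic, or None if n is too small for this (q, delta)."""
    n = len(losses)
    ordered = sorted(losses)
    for k in range(1, n + 1):
        if binom_cdf(k - 1, n, q) >= 1 - delta:
            return ordered[k - 1]
    return None
```

Note the contrast with a sample-mean bound: the guarantee here targets a quantile of the loss distribution directly, which is the risk-sensitive quantity the abstract argues for.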